5 research outputs found

    Sensor fusion in driving assistance systems

    International Mention in the doctoral degree.
    Life in developed and developing countries is highly dependent on road and urban motor transport. This activity involves a high cost for its active and passive users in terms of pollution and accidents, which are largely attributable to the human factor. New developments in safety and driving assistance, called Advanced Driving Assistance Systems (ADAS), are intended to improve safety in transportation and, in the mid-term, lead to autonomous driving. ADAS, like human driving, are based on sensors that provide information about the environment, and sensor reliability is crucial for ADAS applications in the same way that sensing abilities are crucial for human driving. One way to improve sensor reliability is Sensor Fusion: developing novel strategies for environment modeling with the help of several sensors and obtaining enhanced information from the combination of the available data. The present thesis offers a novel solution for obstacle detection and classification in automotive applications using sensor fusion with two widely available sensors: the visible-spectrum camera and the laser scanner. Cameras and lasers are commonly used sensors in the scientific literature, increasingly affordable and ready to be deployed in real-world applications. The proposed solution provides detection and classification of some obstacles commonly present on the road, such as pedestrians and cyclists.
    Novel approaches for detection and classification have been explored in this thesis, from point-cloud clustering classification for the laser scanner, to domain adaptation techniques for synthetic dataset creation, including intelligent cluster extraction and ground detection and removal from point clouds.
    Programa Oficial de Doctorado en Ingeniería Eléctrica, Electrónica y Automática. Thesis committee: President: Cristina Olaverri Monreal; Secretary: Arturo de la Escalera Hueso; Member: José Eugenio Naranjo Hernández.
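The ground detection and removal step mentioned in the abstract can be sketched briefly: fit a plane to the lowest laser returns (assumed to be mostly road), discard points near it, and group the surviving points by Euclidean proximity into obstacle clusters. This is an illustrative simplification, not the thesis's actual pipeline; the function names, thresholds, and the lowest-points seeding heuristic are all assumptions.

```python
import numpy as np

def remove_ground(points, z_thresh=0.2, seed_frac=0.5):
    """Fit a plane z = a*x + b*y + c to the lowest seed_frac of the cloud
    (assumed to be mostly road) and drop points within z_thresh of it."""
    order = np.argsort(points[:, 2])
    seed = points[order[: int(len(points) * seed_frac)]]
    A = np.c_[seed[:, 0], seed[:, 1], np.ones(len(seed))]
    coeffs, *_ = np.linalg.lstsq(A, seed[:, 2], rcond=None)
    plane_z = np.c_[points[:, 0], points[:, 1], np.ones(len(points))] @ coeffs
    return points[np.abs(points[:, 2] - plane_z) > z_thresh]

def euclidean_cluster(points, radius=0.7):
    """Greedy region growing: points closer than `radius` share a cluster."""
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        stack = [i]
        labels[i] = current
        while stack:
            j = stack.pop()
            near = np.where(np.linalg.norm(points - points[j], axis=1) < radius)[0]
            for k in near:
                if labels[k] == -1:
                    labels[k] = current
                    stack.append(k)
        current += 1
    return labels
```

In practice a RANSAC plane fit and a kd-tree-backed clustering (as in PCL) would replace the naive least-squares fit and the O(n²) neighbor search used here for brevity.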

    Context aided pedestrian detection for danger estimation based on laser scanner and computer vision

    Get PDF
    Road safety applications demand the most reliable sensor systems. In recent years, advances in information technologies have led to more complex road safety applications able to cope with a wide variety of situations. These applications have strong sensing requirements that a single sensor, with the available technology, cannot meet. Recent research in Intelligent Transport Systems (ITS) tries to overcome the limitations of individual sensors by combining them. But sensor information alone is not enough for a good and robust representation of the road environment; context information plays a key role in providing reliable detection and complete situation assessment. This paper presents a novel approach for pedestrian detection using sensor fusion of laser scanner and computer vision. The application also takes advantage of context information, providing danger estimation for the detected pedestrians. Closing the loop, the danger estimation is later used, together with context information, as feedback to enhance the pedestrian detection process. This work was supported by the Spanish Government through the CICYT projects (GRANT TRA2010-20225-C03-01), (TEC2011-28626-C02-02) and (GRANT TRA2011-29454-C03-02), by CAM through SEGVAUTO-II (S2009/DPI-1509), and by the mobility program of ‘‘Fundación Caja Madrid’’.
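The danger-estimation idea can be illustrated with a toy score: a short time-to-collision raises the danger, and context (e.g. a mapped crosswalk zone) boosts it further. The paper's actual model is not reproduced here; the function name, constants, and the crosswalk boost are all assumptions for illustration only.

```python
import math

def danger_score(distance_m, closing_speed_ms, in_crosswalk_zone):
    """Toy pedestrian danger estimate in [0, 1]: decays with
    time-to-collision, boosted when context flags a crosswalk zone."""
    if closing_speed_ms <= 0.0:
        return 0.0  # not closing in on the pedestrian
    ttc_s = distance_m / closing_speed_ms
    base = math.exp(-ttc_s / 3.0)      # high when TTC is short
    boost = 1.5 if in_crosswalk_zone else 1.0
    return min(1.0, base * boost)
```

Fed back into detection, such a score could, for instance, lower classifier thresholds in high-danger regions, matching the closed-loop idea in the abstract.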

    Computer vision and laser scanner road environment perception

    A data fusion procedure is presented to enhance classical Advanced Driver Assistance Systems (ADAS). The novel vehicle safety approach combines two classical sensors: computer vision and laser scanner. The laser scanner algorithm performs detection of vehicles and pedestrians based on pattern matching. The computer vision approach is based on Haar-like features for vehicles and Histogram of Oriented Gradients (HOG) features for pedestrians. The high-level fusion procedure uses a Kalman filter and the Joint Probabilistic Data Association (JPDA) algorithm to provide high-level detection. Results proved that, by means of data fusion, the performance of the system is enhanced. This work was supported by the Spanish Government through the CICYT projects (GRANT TRA2010-20225-C03-01) and (GRANT TRA2011-29454-C03-02), and by CAM through SEGVAUTO-II (S2009/DPI-1509).
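The high-level fusion stage can be sketched as a constant-velocity Kalman filter that sequentially absorbs position fixes from both sensors, each with its own noise level. This is a minimal sketch under assumed noise values; the paper's JPDA association step, which decides which detection belongs to which track, is omitted.

```python
import numpy as np

class FusionKF:
    """Constant-velocity Kalman filter fusing 2-D position fixes
    from two sensors with different measurement noise."""
    def __init__(self, dt=0.1):
        self.x = np.zeros(4)                 # state [px, py, vx, vy]
        self.P = np.eye(4) * 10.0            # large initial uncertainty
        self.F = np.eye(4)                   # constant-velocity motion model
        self.F[0, 2] = self.F[1, 3] = dt
        self.Q = np.eye(4) * 0.01            # process noise
        self.H = np.zeros((2, 4))            # we observe position only
        self.H[0, 0] = self.H[1, 1] = 1.0

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, sensor_var):
        """Absorb one position fix z with per-sensor variance sensor_var."""
        R = np.eye(2) * sensor_var
        S = self.H @ self.P @ self.H.T + R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

Calling `update` twice per frame, once with the accurate laser fix and once with the noisier camera fix, weights each sensor by its covariance automatically.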

    Automatic laser and camera extrinsic calibration for data fusion using road plane

    Driving Assistance Systems and Autonomous Driving applications require trustworthy detections. These demanding requirements need sensor fusion to provide information that is reliable enough. But data fusion presents the problem of data alignment in both rotation and translation. Laser scanners and video cameras are widely used in sensor fusion. The laser provides operation in darkness, long-range detection and accurate measurement, but lacks the means for reliable classification due to the limited information provided. The camera provides classification thanks to the amount of data provided, but lacks accuracy in measurement and is sensitive to illumination conditions. Data alignment processes require supervised and accurate measurements that should be performed by experts, or require specific patterns or shapes. This paper presents an algorithm for inter-calibration between the two sensors of our system, requiring only a flat surface for pitch and roll calibration and an obstacle visible to both sensors for determining the yaw. The advantage of this system is that it does not need any particular shape located in front of the vehicle apart from a flat surface, which is usually the road. This way, calibration can be achieved at virtually any time without human intervention. This work was supported by the Automation Engineering Department of De La Salle University, Bogotá, Colombia; the Administrative Department of Science, Technology and Innovation (COLCIENCIAS), Bogotá, Colombia; and the Spanish Government through the CICYT projects (GRANT TRA2010-20225-C03-01) and (GRANT TRA 2011-29454- C03-02).
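The pitch-and-roll part of such a road-plane calibration can be sketched as: fit a plane to the laser returns on the road, then extract the two angles that rotate the sensor so the plane normal becomes vertical. The angle convention below is an assumption for illustration (the paper's exact parameterisation is not reproduced), and the yaw step from a shared obstacle is omitted.

```python
import numpy as np

def plane_normal(points):
    """Least-squares normal of a roughly planar point set (road returns):
    the singular vector with the smallest singular value."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]
    return n if n[2] >= 0 else -n   # normalize sign to point upward

def pitch_roll_from_normal(n):
    """Pitch (about Y) and roll (about X) that map the road normal to +Z.
    Convention assumed here; adapt to your sensor frame."""
    pitch = np.arctan2(n[0], n[2])
    roll = np.arctan2(-n[1], np.hypot(n[0], n[2]))
    return pitch, roll
```

Because any drivable flat patch works as the calibration target, this step can run whenever enough road is visible, which is what makes unattended re-calibration possible.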

    IVVI 2.0: An intelligent vehicle based on computational perception

    This paper presents IVVI 2.0, a smart research platform to foster intelligent systems in vehicles. Computational perception in intelligent transportation systems applications offers advantages, such as the huge amount of data available from the vehicle environment, so computer vision systems and laser scanners are the main devices that accomplish this task. Both have been integrated in our intelligent vehicle to develop cutting-edge applications that cope with perception difficulties, data processing algorithms, expert knowledge, and decision-making. The long-term in-vehicle applications presented in this paper overcome the most significant and fundamental technical limitations, such as robustness in the face of changing environmental conditions. Our intelligent vehicle operates outdoors with pedestrians and other vehicles, and copes with illumination variation, i.e. shadows, low lighting conditions, night vision, among others. Thus, our applications ensure suitable robustness and safety under a large variety of lighting conditions and complex perception tasks. Some of these complex tasks are overcome with the help of additional devices, such as inertial measurement units or differential global positioning systems, or of perception architectures that accomplish sensor fusion processes in an efficient and safe manner. Both the extra devices and the architectures enhance the accuracy of computational perception beyond the properties of each device separately. This work was supported by the Spanish Government through the CICYT projects (GRANT TRA2010-20225-C03-01) and (GRANT TRA2011-29454-C03-02).